Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Ignorant, Irresponsible, and Privileged: A Critique of the CCCC President's Message

Michelle Kassorla takes speaker Jennifer Sano-Franchini to task for an anti-AI speech. Sano-Franchini is president of the Conference on College Composition and Communication (CCCC) and argued "that Generative AI was not representative of the language of linguistic minorities and that it would destroy their history and their linguistic diversity." Kassorla asks, "How much respect does the minority dialect of my student from Nepal or Sudan or the inner city get when they try to fill out a job application or even write a college essay to get into her esteemed university?" There's a lot more here. I didn't see the speech and can't comment on how accurate the critique is, but my inclination is to agree with the criticism. "At this moment in time, writing teachers and the profession of English can position themselves to begin the hard work of preparing our students for a future with AI. Or not." Preventing AI? Not on the table.

Michelle Kassorla, The Multimodal AI Project, 2025/04/11 [Direct Link]
Matter and Space

George Siemens announces Matter and Space. "We've been working on building a new learning operating system that employs the critical capabilities that AI offers in an attempt to re-center and ground learning and sensemaking for humanity in a hopeful future." The site says, "Our platform is designed for those who need a different path—one that integrates lifelong learning with well-being, equipping individuals not just to advance in their careers but to flourish and grow as professionals and human beings." In a video, Siemens says, "our goal with Matter and Space is to know our learners better than they've ever been known by an education or really any other institution." The video shows a question-and-answer chatbot called Learning Environment 1 ("...but you can call me Elly.") A commercial product is expected in the fall.

Paul LeBlanc, George Siemens, Tanya Gamby, Matter and Space, 2025/04/11 [Direct Link]
ATmosphere Report – #111

This issue of the ATmosphere Report looks into the question of how Bluesky is going to make money (and it will have to make a lot of money to satisfy its VC funders). It's not going well. Charging a subscription means they'll have to moderate content a lot better. The idea of corporate placement took a hit after Adobe was chased off the platform. They could set up a marketplace, but it's hard to make that work if other instances are completely independent from the host - something Bluesky isn't allowing just yet. Also, this: "Bluesky and ATProto are aiming to rebuild a social network for the entire globe. And with that come some very difficult challenges, such as that people will use a social network to instigate war and genocide."

Laurens Hof, fediversereport.com, 2025/04/11 [Direct Link]
Virtual reality: The widely-quoted media experts who are not what they seem

This is an "investigation into the widely-quoted national media experts who either do not exist, or whose credentials have not been checked." What's happening is that non-experts are using AI and other tools in order to get quoted in the media as experts; their intent is (probably) to advertise products or services. I wouldn't be surprised if this phenomenon isn't more widespread than people recognize. Definitely we've see waves of 'instant experts' in specialist fields (such as online learning) each time a new tecnology appears on the scene. It's something I'm mindful of as I cite people in OLDaily, but there's no real way to judge other than by assessing the quality of their content - are they using terms correctly, do they credit sources, is there evidence to support their conclusions, are there data or examples I can see for myself, etc.? Via Dan Gillmor.

Rob Waugh, Press Gazette, 2025/04/11 [Direct Link]
We Need an Interventionist Mindset | TechPolicy.Press

This doesn't read as a 100% human authored article (especially the references to 'Captain Sully' and 'Canadian geese'). Still, danah boyd's point here is a good one: instead of saying AI developers should 'fix' problems with their systems ("what María and I call techno-legal solutionism") they suggest that, because AI systems are complex and not deterministic in nature, we should talk of 'interventions'. This especially applies to 'human-in-the-loop' scenarios, which might be more problematic than people suspect: "positioning humans in a way where they are to decide when and where to override the AI often results in them landing in a position that Madeleine Elish calls a 'moral crumple zone'."

danah boyd, Tech Policy Press, 2025/04/10 [Direct Link]
Announcing the Agent2Agent Protocol

This article announces Agent2Agent (A2A), Google's new open protocol supporting interoperable AI solutions. According to Google, "A2A is an open protocol that complements Anthropic's Model Context Protocol (MCP), which provides helpful tools and context to agents... A2A empowers developers to build agents capable of connecting with any other agent built using the protocol and offers users the flexibility to combine agents from various providers." The article lists a set of A2A design principles and describes briefly how it works (as illustrated) and offers a "real world" example in the form of "candidate sourcing" (video). "Read the full specification draft, try out code samples, and see example scenarios on the A2A website". More: Awesome A2A with examples and a list of frameworks, utilities and server implementations.
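To make the flow more concrete, here is a rough sketch (in Python, standard library only) of what a client-side A2A-style exchange could look like: fetching a remote agent's public "agent card" and then posting it a task. The agent URL, endpoint paths and field names below are illustrative assumptions rather than the actual schema; consult the specification draft on the A2A website for the real thing.

    # Hypothetical sketch of an A2A-style exchange: a client discovers a remote
    # agent's capabilities, then sends it a task. Paths and field names are
    # illustrative assumptions only; see the A2A specification for the schema.
    import json
    import urllib.request

    AGENT_BASE = "https://agent.example.com"  # hypothetical remote agent

    # 1. Discovery: fetch the agent's public "agent card" describing its skills.
    with urllib.request.urlopen(f"{AGENT_BASE}/.well-known/agent.json") as resp:
        agent_card = json.load(resp)
    print("Agent skills:", [s.get("name") for s in agent_card.get("skills", [])])

    # 2. Send a task, e.g. the "candidate sourcing" scenario from the article.
    task = {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [{"type": "text",
                       "text": "Find three candidates for a data engineer role."}],
        },
    }
    req = urllib.request.Request(
        f"{AGENT_BASE}/tasks/send",  # illustrative endpoint, not from the spec
        data=json.dumps(task).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    print("Task status:", result.get("status"))

The point of the sketch is the shape of the interaction the announcement describes - discover an agent built by anyone, then hand it work - rather than any particular implementation detail.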

Google for Developers, 2025/04/10 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2025
Last Updated: Apr 12, 2025 07:37 a.m.

Creative Commons License.